27 research outputs found

    Low-rank Approximation of Linear Maps

    This work provides closed-form solutions and minimal achievable errors for a large class of low-rank approximation problems in Hilbert spaces. The proposed theorem generalizes previous results, obtained in the finite-dimensional case for the Frobenius norm, to bounded linear operators and p-th Schatten norms. The theorem is illustrated in various settings, including low-rank approximation problems with respect to the trace norm, the 2-induced norm or the Hilbert-Schmidt norm. The theorem also provides the basics for the design of tractable algorithms for kernel-based or continuous DM
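
    As a point of reference, the finite-dimensional, Frobenius-norm special case that this theorem generalizes is the classical Eckart-Young result, realized by a truncated SVD. A minimal NumPy sketch (the matrix A and rank r below are arbitrary placeholders, not objects from the paper):

        import numpy as np

        def best_rank_r(A, r):
            # Best rank-r approximation of A in the Frobenius norm (Eckart-Young),
            # obtained by truncating the singular value decomposition.
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

        A = np.random.randn(50, 30)
        A5 = best_rank_r(A, 5)
        # The minimal achievable error equals the tail of the singular values.
        err = np.linalg.norm(A - A5, 'fro')
        tail = np.sqrt((np.linalg.svd(A, compute_uv=False)[5:] ** 2).sum())
        print(abs(err - tail) < 1e-10)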

    Coherence-based Partial Exact Recovery Condition for OMP/OLS

    We address the exact recovery of the support of a k-sparse vector with Orthogonal Matching Pursuit (OMP) and Orthogonal Least Squares (OLS) in a noiseless setting. We consider the scenario where OMP/OLS have selected good atoms during the first l iterations (l < k) and derive a new sufficient and worst-case necessary condition for their success in k steps. Our result is based on the coherence μ of the dictionary and relaxes Tropp's well-known condition μ < 1/(2k-1) to the case where OMP/OLS have partial knowledge of the support.
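
    For context, a minimal sketch of the textbook OMP loop and of the mutual coherence μ in which the condition is stated (assuming NumPy and a dictionary D with unit-norm columns; this is a generic implementation, not code from the paper):

        import numpy as np

        def omp(D, y, k):
            # Orthogonal Matching Pursuit: greedily select k atoms of D to explain y.
            support, residual = [], y.copy()
            for _ in range(k):
                # Pick the atom most correlated with the current residual.
                j = int(np.argmax(np.abs(D.T @ residual)))
                support.append(j)
                # Re-fit by orthogonal projection onto the selected atoms.
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef
            return support

        def coherence(D):
            # Mutual coherence: largest |inner product| between distinct unit-norm atoms.
            G = np.abs(D.T @ D)
            np.fill_diagonal(G, 0.0)
            return G.max()

    Tropp's condition mentioned above then reads coherence(D) < 1/(2k-1).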

    Soft Bayesian Pursuit Algorithm for Sparse Representations

    This paper deals with sparse representations within a Bayesian framework. For a Bernoulli-Gaussian model, we propose a method based on a mean-field approximation to estimate the support of the signal. In numerical tests involving a recovery problem, the resulting algorithm is shown to perform well over a wide range of sparsity levels, compared to various state-of-the-art algorithms.
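
    As a toy illustration of the underlying Bernoulli-Gaussian model (and not of the paper's mean-field updates), the support posterior can be written in closed form when the dictionary is orthonormal, since it then factorizes across atoms; all symbols below are illustrative placeholders:

        import numpy as np

        def support_posterior(D, y, p, sx2, sn2):
            # Probability that each atom is active under y = D x + n, where
            # x_i = s_i * g_i, s_i ~ Bernoulli(p), g_i ~ N(0, sx2), n ~ N(0, sn2 I),
            # assuming D has orthonormal columns so the posterior factorizes.
            r = D.T @ y                                    # per-atom correlations
            gauss = lambda v, var: np.exp(-v**2 / (2*var)) / np.sqrt(2*np.pi*var)
            on = p * gauss(r, sx2 + sn2)                   # atom active: variances add
            off = (1 - p) * gauss(r, sn2)                  # atom inactive: noise only
            return on / (on + off)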

    Structured Bayesian Orthogonal Matching Pursuit

    Taking advantage of the structures inherent in many sparse decompositions constitutes a promising research axis. In this paper, we address this problem from a Bayesian point of view. We exploit a Boltzmann machine, which allows a large variety of structures to be taken into account, and focus on the resolution of a joint maximum a posteriori problem. The proposed algorithm, called Structured Bayesian Orthogonal Matching Pursuit (SBOMP), is a structured extension of the Bayesian Orthogonal Matching Pursuit algorithm (BOMP) introduced in our previous work. In numerical tests involving a recovery problem, SBOMP is shown to perform well over a wide range of sparsity levels while keeping a reasonable computational complexity.
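
    For illustration only, a sketch of the Boltzmann-machine log-prior over support patterns that such a structured pursuit adds to its data-fit term in a joint MAP criterion; the bias vector b and interaction matrix W below are placeholders, and the exact SBOMP selection rule is not reproduced:

        import numpy as np

        def boltzmann_log_prior(s, b, W):
            # Unnormalized log-probability of a support pattern s in {0, 1}^n under a
            # Boltzmann-machine prior exp(b.s + 0.5 s.W.s): b biases individual atoms,
            # W encodes pairwise interactions (e.g. neighbouring or co-occurring atoms).
            s = np.asarray(s, dtype=float)
            return b @ s + 0.5 * s @ W @ s

    A structured greedy step would then score each candidate atom by its data-fit gain plus the change in this log-prior when the atom is added to the support.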

    Bayesian Estimation of Turbulent Motion


    Spatial intra-prediction based on mixtures of sparse representations

    In this paper, we consider the problem of spatial prediction based on sparse representations. Several algorithms dealing with this problem can be found in the literature. We propose a novel method involving a mixture of sparse representations. We first place this approach in a probabilistic framework and then derive a practical procedure to solve it. Comparisons of the rate-distortion performance show the superiority of the proposed algorithm over other state-of-the-art algorithms.
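
    As a rough sketch of the single-representation building block (the paper mixes several such predictions), an unknown block can be predicted by fitting the known causal template with a few atoms and extrapolating the same coefficients; the split of each atom into 'known' and 'block' rows, and all names below, are assumptions for illustration:

        import numpy as np

        def sparse_intra_predict(D_known, D_block, template, k):
            # Fit the causal template (known pixels) with k atoms restricted to their
            # 'known' rows, then propagate the same sparse combination through the
            # atoms' 'block' rows to predict the unknown pixels (k >= 1 assumed).
            support, residual = [], template.copy()
            for _ in range(k):
                j = int(np.argmax(np.abs(D_known.T @ residual)))
                support.append(j)
                coef, *_ = np.linalg.lstsq(D_known[:, support], template, rcond=None)
                residual = template - D_known[:, support] @ coef
            return D_block[:, support] @ coef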

    Joint Screening Tests for LASSO

    This paper focuses on "safe" screening techniques for the LASSO problem. Motivated by the need for low-complexity algorithms, we propose a new approach, dubbed "joint screening test", which allows a set of atoms to be screened by carrying out one single test. The approach is particularized to two different sets of atoms, respectively expressed as sphere and dome regions. After presenting the mathematical derivations of the tests, we elaborate on their relative effectiveness and discuss the practical use of such procedures.
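
    For illustration, one way a whole sphere of unit-norm atoms can be discarded with a single test, given a "safe" ball of center c and radius r known to contain the LASSO dual solution; this is a sketch of the sphere-region idea under stated assumptions (unit-norm atoms, suitably normalized dual problem), and the dome-region test is not reproduced:

        import numpy as np

        def joint_sphere_test(t, eps, c, r):
            # All unit-norm atoms d with ||d - t|| <= eps satisfy, for any dual point
            # theta with ||theta - c|| <= r:
            #     |d.theta| <= |t.c| + eps*||c|| + r
            # If this bound is below 1, every atom in the sphere can be screened out.
            return abs(t @ c) + eps * np.linalg.norm(c) + r < 1.0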

    An instance optimality property for approximation problems with multiple approximation subspaces

    Model-order reduction methods tackle the following general approximation problem: find an "easily-computable" but accurate approximation ĥ of some target solution h. In order to achieve this goal, standard methodologies combine two main ingredients: i) a set of problem-specific constraints; ii) some "simple" prior model on the set of target solutions. The most common prior model encountered in the literature assumes that the target solution h is "close" to some low-dimensional subspace. Recently, triggered by the work by Binev et al. [5], several contributions have shown that refined prior models (based on a set of embedded approximation subspaces) may lead to enhanced approximation performance. Unfortunately, to date, no theoretical results have been derived to support the good empirical performance observed in these contributions. The goal of this work is to fill this gap. More specifically, we provide a mathematical characterization of the approximation performance achievable by a particular "multi-space" decoder and emphasize that, in some specific setups, this "multi-space" decoder has provably better recovery guarantees than its standard counterpart based on a single approximation subspace.
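
    As a toy illustration of the multi-space idea (not the paper's decoder, which works from indirect observations of h and comes with instance-optimality guarantees), one can project onto each candidate subspace and keep the projection with the smallest residual:

        import numpy as np

        def multi_space_approx(h, subspaces):
            # 'subspaces' is a list of matrices whose columns span the candidate
            # approximation subspaces; return the orthogonal projection of h onto
            # the candidate that leaves the smallest residual.
            best, best_err = None, np.inf
            for V in subspaces:
                Q, _ = np.linalg.qr(V)             # orthonormal basis of span(V)
                proj = Q @ (Q.T @ h)
                err = np.linalg.norm(h - proj)
                if err < best_err:
                    best, best_err = proj, err
            return best, best_err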